Sensitivity analysis on stormwater management response to land cover dynamics and urban expansion of a developing city in the Lake Hawassa watershed, Ethiopia

Abstract: Hawassa is a rapidly developing city in the Lake Hawassa watershed of Ethiopia. Analyzing the effect of land cover dynamics on surface runoff remains imperative for adaptive urban stormwater management. This study quantified the spatial variation of land cover and the sensitivity of the stormwater management response. Thirty years of historical daily rainfall, three satellite images, a DEM, and hydrological soil group data were analyzed. A combined statistical approach of geospatial techniques and the Soil Conservation Service Curve Number (SCS-CN) model was employed, and CN and surface runoff depth were determined for the delineated urban watersheds. The results revealed that the built-up area increased by 30.9 km², with a rate that varies spatially. Variation in impervious land cover explains 58.6% of the change in CN, with a coefficient of 0.352, while CN is inversely correlated with variations in agricultural and vegetation land cover. The findings suggest that CN explains 96.78% of the change in surface runoff, with a statistically significant coefficient of 3.91. The proposed integrated modeling approach demonstrates the potential to characterize the relationship between the spatial effect of land cover variation and surface runoff at the urban watershed scale. Thus, suitable, locally specific solutions can be devised for effective flood risk management and optimization of urban drainage systems.
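The runoff estimate in this study rests on the standard SCS-CN relation, in which a higher curve number (CN) from increased imperviousness translates directly into more runoff for the same storm. A minimal sketch of that relation follows; the 0.2 initial abstraction ratio is the conventional default, and the rainfall depth and pre/post-expansion CN values are illustrative, not figures from the study.

```python
def scs_cn_runoff(rainfall_mm: float, cn: float, ia_ratio: float = 0.2) -> float:
    """Direct runoff depth (mm) from the standard SCS-CN relation."""
    s = 25400.0 / cn - 254.0   # potential maximum retention (mm)
    ia = ia_ratio * s          # initial abstraction
    if rainfall_mm <= ia:
        return 0.0
    return (rainfall_mm - ia) ** 2 / (rainfall_mm - ia + s)

# Illustrative comparison of a pre- and post-urbanization composite CN
for label, cn in [("pre-expansion CN", 72.0), ("post-expansion CN", 85.0)]:
    q = scs_cn_runoff(rainfall_mm=60.0, cn=cn)
    print(f"{label}: runoff depth = {q:.1f} mm for a 60 mm storm")
```

The same 60 mm storm produces noticeably more runoff under the higher CN, which is the mechanism linking built-up expansion to drainage stress in the abstract.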

A framework for modelling the probability of flooding under levee breaching

Abstract: Levees aim to provide protection during floods; however, these structures can breach, causing significant damage. Flood maps that include levee breaching are often limited to deterministic methods. Where probabilistic breaching is modelled, it typically requires computationally expensive Monte Carlo simulations and an understanding of geotechnical levee properties that is often lacking. In this paper, we combine existing fragility curves and empirical breaching equations into a framework for automating levee breaching in catchments with limited geotechnical information. The method can be adapted and applied to existing 2D flood models to determine the probability of inundation, given that a breach occurs. This ultimately allows for better emergency response and land use planning to reduce the flood risk faced by our communities. The method was applied to four case study catchments. The results showed that including levee breaching in one catchment led to an average increase in inundated area of 48.2% and a tripling of the potentially exposed area. However, breaching at some locations reduced the inundation extent by 12%, illustrating the potential of fuse plug levees and floodways as a flood mitigation strategy, one that has been used successfully internationally. Further investigation is recommended to consider whether these mitigation strategies should be enacted.
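One way to automate probabilistic breaching without detailed geotechnical data, as the abstract describes, is to attach a fragility curve to each levee segment and evaluate it against the simulated water level. The sketch below assumes a lognormal fragility form and invented segment parameters; it is not the paper's framework, only an illustration of the fragility-curve idea.

```python
import math

def fragility_prob(water_level_m: float, median_capacity_m: float, beta: float) -> float:
    """Lognormal fragility curve: P(breach | water level against the levee)."""
    if water_level_m <= 0:
        return 0.0
    z = math.log(water_level_m / median_capacity_m) / beta
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))  # standard normal CDF

# Probability that at least one of several independent levee segments breaches
segments = [   # (median capacity in m, lognormal dispersion) -- illustrative values
    (3.2, 0.4),
    (2.8, 0.5),
    (3.6, 0.3),
]
water_level = 3.0  # simulated flood water level at the levee (m)
p_no_breach = 1.0
for median, beta in segments:
    p_no_breach *= 1.0 - fragility_prob(water_level, median, beta)
print(f"P(at least one breach) = {1.0 - p_no_breach:.2f}")
```

A breach probability obtained this way can then condition the inundation maps produced by an existing 2D flood model, which is the coupling the abstract refers to.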

Using multi‐criteria decision‐making methods in prioritizing structural flood control solutions: A case study from Iran

Abstract: Effective management of flood risks requires the prioritization of appropriate flood control solutions. This study aims to prioritize structural flood control options using multi-criteria decision-making (MCDM) methods. Four MCDM methods, namely the analytic hierarchy process, the technique for order preference by similarity to ideal solution, multi-criteria optimization and compromise solution, and Fuzzy-VIKOR, are employed to assess and rank the flood control options against multiple criteria. Field surveys, interviews with local authorities and experts, and on-site assessments of existing flood control structures constituted the primary data collection methods. The findings demonstrate the effectiveness of reservoir dams, retention basins, and levees as viable solutions. Conversely, flood control gates and the no-project option were assigned lower priorities. The findings highlight the importance of considering multiple MCDM methods to account for variations in rankings. The study provides valuable insights into the decision-making process for prioritizing flood control options in the study area. These findings can assist policymakers and stakeholders in effectively allocating resources and implementing appropriate structural flood control measures to mitigate flood risks.
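Of the four MCDM methods named, TOPSIS is the most compact to illustrate. The sketch below ranks a set of hypothetical flood-control options against weighted criteria; the options, scores, weights, and benefit/cost designations are invented for illustration and do not reproduce the study's decision matrix.

```python
import numpy as np

def topsis(scores: np.ndarray, weights: np.ndarray, benefit: np.ndarray) -> np.ndarray:
    """Return TOPSIS closeness coefficients (higher = better) for each alternative."""
    norm = scores / np.linalg.norm(scores, axis=0)           # vector-normalize each criterion
    v = norm * weights                                       # weighted normalized matrix
    ideal = np.where(benefit, v.max(axis=0), v.min(axis=0))  # positive-ideal solution
    anti = np.where(benefit, v.min(axis=0), v.max(axis=0))   # negative-ideal solution
    d_pos = np.linalg.norm(v - ideal, axis=1)
    d_neg = np.linalg.norm(v - anti, axis=1)
    return d_neg / (d_pos + d_neg)

# Illustrative decision matrix: rows = options, columns = criteria
options = ["reservoir dam", "retention basin", "levee", "flood control gate", "no project"]
scores = np.array([
    [9.0, 8.0, 7.0],   # effectiveness, cost, construction time
    [8.0, 5.0, 4.0],
    [7.0, 4.0, 3.0],
    [5.0, 6.0, 5.0],
    [1.0, 1.0, 1.0],
])
weights = np.array([0.5, 0.3, 0.2])
benefit = np.array([True, False, False])   # cost and time are non-benefit criteria
for name, c in sorted(zip(options, topsis(scores, weights, benefit)), key=lambda x: -x[1]):
    print(f"{name}: closeness = {c:.3f}")
```

The closeness coefficient measures how near each option lies to the positive-ideal solution relative to the negative-ideal one; comparing such rankings across several MCDM methods is what the study uses to check robustness.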

Overcoming data utilization challenges for built environment flood resilience: Strategies and best practices

Abstract: Built environment flood resilience is a critical challenge facing communities worldwide. Among the various efforts towards resilience, data utilization has become increasingly popular with rapid technological advancement. The built environment generates large volumes of diverse data throughout its life cycle, and the importance of these data for flood resilience cannot be ignored. However, despite the power of data, the substantial opportunities for enhancing flood resilience have been hindered by numerous, complex, and largely unidentified challenges. Identifying these challenges, together with timely and relevant strategies, is therefore a significant need. One of the best ways to tackle these challenges is to view them through the lens of the data life cycle stages. This study therefore aimed to identify the challenges at each stage of the data life cycle, along with strategies to overcome them. Semi-structured interviews conducted with 12 experts revealed the significant challenges associated with built environment data, together with potential future strategies. Qualitative content analysis was used to analyse the findings. The use of advanced sensing technologies, cloud-based storage solutions, data governance policies, and the development of predictive models are among the key strategies outlined in this study. These findings provide valuable insights and guidance to facilitate built environment data utilization for flood resilience.

Remote sensing‐based mapping of structural building damage in the Ahr valley

Abstract: Flood damage data are needed for various applications. Structural damage to buildings can reflect not only the economic damage but also the life-threatening condition of a building, both of which provide crucial information for disaster response and recovery. Since traditional on-site data collection shortly after a disaster is challenging, remote sensing data can be of great help: they cover a wider area and can be deployed earlier than on-site surveys. However, this approach has its own challenges and limitations. We illustrate this with two case studies of flash floods in Germany. First, we assessed the reliability of an existing flood damage schema, which ranges from minor (structural) damage to complete building collapse. We compared two on-site raters of the 2016 Braunsbach flood and found an excellent level of reliability. Second, we mapped structural building damage after the 2021 flood in the Ahr valley using a textured 3D mesh and orthophotos. Here, we evaluated the remote sensing-based damage mapping carried out by three raters. Although the heterogeneity of ratings based on remote sensing data is larger than among on-site ratings, we consider the approach fit for purpose when compared with on-site mapping, especially for event documentation and as a basis for financial damage estimation and less complex numerical modelling.
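The reliability comparison between raters can be quantified with a chance-corrected agreement statistic. A common choice for an ordinal damage scale is weighted Cohen's kappa, sketched below with invented ratings from two hypothetical raters; the abstract does not state which statistic the study actually used, so this is only an illustration of the idea.

```python
from sklearn.metrics import cohen_kappa_score

# Hypothetical ordinal damage grades (0 = no damage ... 5 = total collapse)
# assigned by two raters to the same set of buildings.
rater_a = [0, 1, 2, 2, 3, 4, 5, 1, 0, 3, 4, 2]
rater_b = [0, 1, 2, 3, 3, 4, 5, 1, 1, 3, 4, 2]

# Quadratic weighting penalizes large disagreements more than adjacent-grade ones,
# which suits an ordinal damage scale.
kappa = cohen_kappa_score(rater_a, rater_b, weights="quadratic")
print(f"weighted kappa = {kappa:.2f}")
```

The same calculation applies whether the grades come from on-site surveys or from remote sensing products such as a textured 3D mesh, which makes it a convenient yardstick for the comparison described in the abstract.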

Quantifying compound flood event uncertainties in a wave and tidally dominated coastal region: The impacts of copula selection, sampling, record length, and precipitation gauge selection

Abstract: Coastal flooding is a growing hazard. Compound event characterization and uncertainty quantification are critical to accurate flood risk assessment. This study presents univariate, conditional, and joint probabilities for observed water levels, precipitation, and waves. Design events for the 10- and 100-year marine water level and precipitation events are developed. A total water level formulation explicitly accounting for wave impacts is presented. Uncertainties associated with the sampling method, copula selection, data record length, and the rainfall gauge used are determined. Eight copulas are used to quantify multivariate uncertainty. In general, the copulas produce similar results, with the exception of the BB5. Sampling method uncertainty was quantified using four sampling types: annual maximum, annual coinciding, wet season monthly maximum, and wet season monthly coinciding sampling. Annual coinciding sampling typically produced the lowest event magnitude estimates. Uncertainty associated with record length was explored by partitioning a 100-year record into various subsets. Withholding 30 years of observations (i.e., using records of less than 70 years) resulted in substantial variability in both the 10- and 100-year return period estimates. Approximately equidistant rainfall gauges led to large differences in event estimates, suggesting that microclimatology and gauge selection play a key role in characterizing compound events. Overall, event estimate uncertainty was dominated by the sampling method and rainfall gauge selection.
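The joint probabilities referred to here come from a copula, which links the marginal non-exceedance probabilities of two drivers into a joint distribution. The sketch below uses the closed-form bivariate Gumbel copula (one of many families such a study can compare) to show how dependence shortens the joint return period of a compound event relative to independence; the dependence parameter theta is illustrative, not a fitted value from the paper.

```python
import math

def gumbel_copula(u: float, v: float, theta: float) -> float:
    """Bivariate Gumbel copula C(u, v); theta >= 1, with theta = 1 giving independence."""
    return math.exp(-((-math.log(u)) ** theta + (-math.log(v)) ** theta) ** (1.0 / theta))

def joint_exceedance(u: float, v: float, theta: float) -> float:
    """P(water level exceeds its u-quantile AND precipitation exceeds its v-quantile)."""
    return 1.0 - u - v + gumbel_copula(u, v, theta)

# Marginal 10-year events (annual non-exceedance probability 0.9) under an
# illustrative dependence strength versus independence.
u = v = 0.9
for theta in (1.0, 2.0):
    p_and = joint_exceedance(u, v, theta)
    print(f"theta={theta}: joint exceedance p={p_and:.4f}, joint return period ~{1/p_and:.0f} yr")
```

Swapping the copula family, the sampling scheme that produces u and v, or the rainfall gauge behind the precipitation margin changes these numbers, which is exactly the uncertainty the study quantifies.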

A comparative spatial analysis of flood susceptibility mapping using boosting machine learning algorithms in Rathnapura, Sri Lanka

Abstract: Identifying flood-prone areas is essential for preventing floods, reducing risks, and making informed decisions. A spatial database of 595 flood inventory points and 13 flood predictors was used to implement five boosting algorithms: gradient boosting machine (GBM), extreme gradient boosting, categorical boosting, logit boost, and light gradient boosting machine (LightGBM), to map flood susceptibility in Rathnapura while evaluating the trained models' ability to generalize and assessing feature importance in flood susceptibility mapping (FSM). Model performance was evaluated using the F1-score, the kappa index, and the area under the curve (AUC). The findings revealed that all the models were effective in identifying the overall flood susceptibility trends, while the LightGBM model achieved the best results (F1-score = 0.907, kappa = 0.813, and AUC = 0.970), securing the top scores across all performance metrics on the testing dataset. Based on the kappa evaluation, most of the models performed well (minimum AUC = 0.737), while LightGBM showed moderate performance for predictions beyond the training region. According to the results, regions with lower altitude and topographic roughness, moderate rainfall, and proximity to rivers are more susceptible to flooding. This framework can be adapted for rapid FSM in data-deficient regions.
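A minimal version of this workflow, a boosting classifier plus the abstract's three evaluation metrics, can be sketched with scikit-learn's GradientBoostingClassifier standing in for the GBM family; the synthetic predictors and flood labels below are placeholders for the actual 595-point inventory and 13 predictors.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import f1_score, cohen_kappa_score, roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 1190                                  # e.g. flood and non-flood points combined
X = rng.normal(size=(n, 13))              # 13 predictors (elevation, roughness, rainfall, ...)
# Synthetic response loosely mimicking "low elevation + high rainfall + near river"
logit = -X[:, 0] + 0.8 * X[:, 1] - 0.6 * X[:, 2] + rng.normal(scale=0.5, size=n)
y = (logit > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0, stratify=y)
model = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)

proba = model.predict_proba(X_te)[:, 1]
pred = (proba >= 0.5).astype(int)
print(f"F1 = {f1_score(y_te, pred):.3f}, "
      f"kappa = {cohen_kappa_score(y_te, pred):.3f}, "
      f"AUC = {roc_auc_score(y_te, proba):.3f}")
```

Testing the fitted model on points outside the training region, as the study does, is what separates apparent skill from true generalization.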

Beyond a fixed number: Investigating uncertainty in popular evaluation metrics of ensemble flood modeling using bootstrapping analysis

Abstract: Evaluating the performance of flood models is a crucial step in the modeling process. Given the limitations of the single statistical metrics widely used in model evaluation, such as uncertainty bounds, Nash-Sutcliffe efficiency, Kling-Gupta efficiency, and the coefficient of determination, the inherent properties and sampling uncertainty of these metrics are demonstrated. A comprehensive evaluation is conducted using an ensemble of one-dimensional Hydrologic Engineering Center's River Analysis System (HEC-RAS) models, which account for the uncertainty associated with channel roughness and upstream flow input, for six reaches located in Indiana and Texas in the United States. Specifically, the effects of different prior distributions of the uncertainty sources, multiple high-flow scenarios, and various types of measurement error in the observations on the evaluation metrics are investigated using bootstrapping. Results show that model performance based on uniform and normal priors is comparable. The statistical distributions of all the evaluation metrics in this study differ significantly under different high-flow scenarios, suggesting that the metrics should be treated as "random" variables, due to both aleatory and epistemic uncertainties, and conditioned on the specific flow periods of interest. Additionally, white-noise error in the observations has the least impact on the metrics.
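The core idea, that an evaluation metric has a sampling distribution rather than a single fixed value, can be shown by bootstrapping paired observed and simulated series. The sketch below defines NSE and the 2009 Kling-Gupta efficiency from their standard formulas and resamples a synthetic daily series standing in for HEC-RAS output and gauge observations; the noise levels and resulting intervals are illustrative only.

```python
import numpy as np

def nse(obs: np.ndarray, sim: np.ndarray) -> float:
    """Nash-Sutcliffe efficiency."""
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

def kge(obs: np.ndarray, sim: np.ndarray) -> float:
    """Kling-Gupta efficiency (2009 formulation)."""
    r = np.corrcoef(obs, sim)[0, 1]
    alpha = sim.std() / obs.std()
    beta = sim.mean() / obs.mean()
    return 1.0 - np.sqrt((r - 1) ** 2 + (alpha - 1) ** 2 + (beta - 1) ** 2)

rng = np.random.default_rng(1)
obs = 50 + 20 * np.sin(np.linspace(0, 6, 365)) + rng.normal(scale=3, size=365)
sim = obs + rng.normal(scale=5, size=365)     # stand-in for one model realization

# Bootstrap the paired series to obtain sampling distributions of the metrics
boot_nse, boot_kge = [], []
for _ in range(2000):
    idx = rng.integers(0, len(obs), size=len(obs))
    boot_nse.append(nse(obs[idx], sim[idx]))
    boot_kge.append(kge(obs[idx], sim[idx]))
print(f"NSE 95% interval: {np.percentile(boot_nse, [2.5, 97.5]).round(3)}")
print(f"KGE 95% interval: {np.percentile(boot_kge, [2.5, 97.5]).round(3)}")
```

Repeating this for different flow periods or error assumptions reveals how strongly the metric distributions shift, which is the behaviour the abstract reports for the HEC-RAS ensemble.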
